Boundedness of a Batch Gradient Method with Penalty for Feedforward Neural Networks

Authors

  • HUISHENG ZHANG
  • WEI WU
  • MINGCHEN YAO
Abstract

This paper considers a batch gradient method with penalty for training feedforward neural networks. The role of the penalty term is to control the magnitude of the weights and to improve the generalization performance of the network. A usual penalty is considered: a term proportional to the norm of the weights. The boundedness of the weights of the network is proved. This boundedness is assumed as a precondition in an existing convergence result, so our result improves that convergence result.

Key words: Batch gradient method; Feedforward neural network; Boundedness; Penalty
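The training scheme described above can be sketched in code. The following is a minimal illustration, not the paper's exact method: it assumes a one-hidden-layer tanh network with a linear output and an L2 penalty (a term proportional to the squared norm of the weights) added to the batch squared error; the network architecture, learning rate, and penalty coefficient are all illustrative choices.

```python
import numpy as np

def batch_gradient_penalty(X, y, hidden=8, lam=1e-3, lr=0.1, epochs=200, seed=0):
    """Batch gradient descent on 0.5*mean(err^2) + 0.5*lam*||weights||^2.

    A sketch of batch (full-gradient) training with a weight-norm
    penalty; the penalty gradient lam*W pulls every weight toward
    zero each step, which is what keeps the weights bounded.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))  # input -> hidden
    w2 = rng.normal(scale=0.5, size=hidden)                # hidden -> output
    n = len(y)
    for _ in range(epochs):
        H = np.tanh(X @ W1)      # hidden activations for the whole batch
        err = H @ w2 - y         # batch output error
        # gradients of the penalized error function
        g2 = H.T @ err / n + lam * w2
        G1 = X.T @ (np.outer(err, w2) * (1 - H**2)) / n + lam * W1
        w2 -= lr * g2
        W1 -= lr * G1
    return W1, w2
```

With the penalty term active, the weight norms stay finite over arbitrarily many epochs, which is the boundedness property the paper establishes rigorously.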


Similar articles

Convergence of an Online Gradient Algorithm with Penalty for Two-layer Neural Networks

The online gradient algorithm has been widely used as a learning algorithm for training feedforward neural networks. Penalty is a common and popular method for improving the generalization performance of networks. In this paper, a convergence theorem is proved for the online gradient learning algorithm with penalty, a term proportional to the magnitude of the weights. The monotonicity of the error ...


Convergence of Batch BP Algorithm with Penalty for FNN Training

Penalty methods have been commonly used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. Weight boundedness and convergence results are presented for the batch BP algorithm with penalty for training feedforward neural networks with a hidden layer. A key point of the proofs is the monotonicity of the error function with...


Convergence of Online Gradient Method with a Penalty Term for Feedforward Neural Networks with Stochastic

Abstract: The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are input in a stochastic way. The monotonicity of the error function in the iteration and the boundedness of the weight are both guaran...


Convergence Analysis of Multilayer Feedforward Networks Trained with Penalty Terms: a Review

Gradient descent method is one of the popular methods to train feedforward neural networks. Batch and incremental modes are the two most common methods to practically implement the gradient-based training for such networks. Furthermore, since generalization is an important property and quality criterion of a trained network, pruning algorithms with the addition of regularization terms have been...


Convergence of online gradient method for feedforward neural networks with smoothing L1/2 regularization penalty

Minimization of the training regularization term has been recognized as an important objective for sparse modeling and generalization in feedforward neural networks. Most studies so far have focused on the popular L2 regularization penalty. In this paper, we consider the convergence of the online gradient method with a smoothing L1/2 regularization term. For normal L1/2 regularization, th...
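The smoothing mentioned above addresses the fact that the L1/2 penalty sum |w|^{1/2} is not differentiable at zero. A common remedy, sketched below, replaces |w| near the origin with a C^1 piecewise polynomial h(w) so that sum h(w)^{1/2} has a well-defined gradient everywhere; the particular polynomial and the threshold a here are one standard choice and may differ from the cited paper's exact smoothing function.

```python
import numpy as np

def smoothed_l_half(w, a=0.1):
    """Smoothed L1/2 penalty and its gradient.

    h(w) equals |w| for |w| >= a, and a C^1 quartic polynomial for
    |w| < a; h(w) >= 3a/8 > 0, so sqrt(h(w)) is differentiable even
    at w = 0, unlike the raw |w|^{1/2} penalty.
    """
    w = np.asarray(w, dtype=float)
    aw = np.abs(w)
    h = np.where(aw >= a, aw, -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8)
    hp = np.where(aw >= a, np.sign(w), -w**3 / (2 * a**3) + 3 * w / (2 * a))
    penalty = np.sum(np.sqrt(h))          # smoothed L1/2 penalty value
    grad = hp / (2.0 * np.sqrt(h))        # d/dw of sqrt(h(w)), elementwise
    return penalty, grad
```

At the matching point |w| = a, both h and its derivative agree with |w| and sign(w), so the penalty and its gradient are continuous across the switch.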



Journal:

Volume   Issue 

Pages  -

Publication date: 2007